Feat: Add usage to streamed completions #708
Conversation
Give Pint a run to fix the formatting so CI passes, and I believe it's good to go. I'll just double-check that ResponseUsage can handle all the missing/null values that will be provided here by the legacy completions endpoint.
Is this not an OpenAI-side change? I updated my test scripts and ran this:

```php
$stream = OpenAI::completions()->createStreamed([
    'model' => 'gpt-3.5-turbo-instruct',
    'prompt' => 'This is a test',
]);

foreach ($stream as $response) {
    dump(json_encode($response->toArray()));
}
```
Did you try with

```php
'stream_options' => [
    'include_usage' => true,
],
```

...? If that still doesn't include the usage, it could be a
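For context, the full request with usage enabled would look something like this. This is a sketch against the openai-php client as used in the test script above; the exact shape of the final chunk depends on the library version:

```php
$stream = OpenAI::completions()->createStreamed([
    'model' => 'gpt-3.5-turbo-instruct',
    'prompt' => 'This is a test',
    'stream_options' => [
        'include_usage' => true,
    ],
]);

// With include_usage enabled, OpenAI sends one extra final chunk whose
// `choices` array is empty and whose `usage` field is populated.
foreach ($stream as $response) {
    dump(json_encode($response->toArray()));
}
```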
I just tested with OpenAI, and these are the last few chunks when I set `include_usage`. My JSON body:
Thanks - this was it.
What:
Description:
Currently, token usage is not included in the streamed chunks returned by the completions API. This PR ensures that the usage is set if it is included in the response.
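With this change, a consumer can read the usage off the final streamed chunk. A minimal sketch, assuming the streamed completion response exposes a nullable `usage` property (as this PR adds) with the usual token-count fields:

```php
$totalTokens = null;

foreach ($stream as $response) {
    // Only the final chunk carries usage when include_usage is set;
    // earlier chunks leave it null.
    if ($response->usage !== null) {
        $totalTokens = $response->usage->totalTokens;
    }
}

echo "Total tokens: {$totalTokens}\n";
```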